Results 1 - 20 of 23
1.
Immunoinformatics (Amst) ; 13, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38525047

ABSTRACT

The vast potential sequence diversity of TCRs and their ligands has presented an historic barrier to computational prediction of TCR epitope specificity, a holy grail of quantitative immunology. One common approach is to cluster sequences together, on the assumption that similar receptors bind similar epitopes. Here, we provide the first independent evaluation of widely used clustering algorithms for TCR specificity inference, observing some variability in predictive performance between models, and marked differences in scalability. Despite these differences, we find that different algorithms produce clusters with high degrees of similarity for receptors recognising the same epitope. Our analysis strengthens the case for use of clustering models to identify signals of common specificity from large repertoires, whilst highlighting scope for improvement of complex models over simple comparators.
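The clustering approach described above can be sketched in a few lines. This is an illustrative toy, not any of the benchmarked tools: it greedily groups CDR3 amino-acid sequences by edit distance, under the stated assumption that similar receptors bind similar epitopes. The sequences and distance threshold are hypothetical.

```python
# Toy TCR clustering sketch (hypothetical; not one of the evaluated models).

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def cluster_cdr3(seqs, max_dist=2):
    """Assign each sequence to the first cluster whose representative
    is within max_dist edits; otherwise start a new cluster."""
    reps, clusters = [], []
    for s in seqs:
        for k, r in enumerate(reps):
            if edit_distance(s, r) <= max_dist:
                clusters[k].append(s)
                break
        else:
            reps.append(s)
            clusters.append([s])
    return clusters

# Illustrative CDR3 sequences: the first two differ by one substitution.
cdr3 = ["CASSLGTDTQYF", "CASSLGADTQYF", "CASRPGQGNTEAFF"]
print(cluster_cdr3(cdr3))
```

Real tools differ mainly in the distance function (e.g. alignment-derived or learned metrics) and in how clusters are formed, which is where the scalability differences noted above arise.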

2.
Ultramicroscopy ; 256: 113882, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37979542

ABSTRACT

Simulations of cryo-electron microscopy (cryo-EM) images of biological samples can be used to produce test datasets to support the development of instrumentation, methods, and software, as well as to assess data acquisition and analysis strategies. To be useful, these simulations need to be based on physically realistic models which include large volumes of amorphous ice. The gold standard model for EM image simulation is a physical atom-based ice model produced using molecular dynamics simulations. Although practical for small sample volumes; for simulation of cryo-EM data from large sample volumes, this can be too computationally expensive. We have evaluated a Gaussian Random Field (GRF) ice model which is shown to be more computationally efficient for large sample volumes. The simulated EM images are compared with the gold standard atom-based ice model approach and shown to be directly comparable. Comparison with experimentally acquired data shows the Gaussian random field ice model produces realistic simulations. The software required has been implemented in the Parakeet software package and the underlying atomic models are available online for use by the wider community.
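As a rough illustration of why a Gaussian random field is cheaper than an atom-based model, the sketch below builds a small 1-D field as a sum of cosine modes with random phases weighted by a Gaussian power spectrum. This is not the Parakeet implementation; real ice models are 3-D, and every name and parameter here is an assumption.

```python
# Hypothetical 1-D Gaussian-random-field sketch (illustrative only).
import math
import random

def gaussian_random_field(n=256, n_modes=64, corr=0.1, seed=0):
    """Sum of cosine modes with random phases; mode amplitudes follow a
    Gaussian spectrum, giving a smooth correlated random field."""
    rng = random.Random(seed)
    amps = [math.exp(-0.5 * (k * corr) ** 2) for k in range(1, n_modes + 1)]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_modes)]
    field = []
    for i in range(n):
        x = i / n
        field.append(sum(a * math.cos(2.0 * math.pi * k * x + p)
                         for k, (a, p) in enumerate(zip(amps, phases), 1)))
    return field

f = gaussian_random_field()
print(len(f))
```

The cost scales with the number of grid points and modes rather than with the number of water atoms, which is why this style of model stays tractable for large sample volumes.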


Subjects
Ice, Software, Cryoelectron Microscopy/methods, Molecular Dynamics Simulation
3.
Histochem Cell Biol ; 160(3): 253-276, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37284846

ABSTRACT

Public participation in research, also known as citizen science, is being increasingly adopted for the analysis of biological volumetric data. Researchers working in this domain are applying online citizen science as a scalable distributed data analysis approach, with recent research demonstrating that non-experts can productively contribute to tasks such as the segmentation of organelles in volume electron microscopy data. This, alongside the growing challenge to rapidly process the large amounts of biological volumetric data now routinely produced, means there is increasing interest within the research community to apply online citizen science for the analysis of data in this context. Here, we synthesise core methodological principles and practices for applying citizen science for analysis of biological volumetric data. We collate and share the knowledge and experience of multiple research teams who have applied online citizen science for the analysis of volumetric biological data using the Zooniverse platform ( www.zooniverse.org ). We hope this provides inspiration and practical guidance regarding how contributor effort via online citizen science may be usefully applied in this domain.


Subjects
Citizen Science, Humans, Community Participation
4.
Elife ; 12, 2023 Feb 21.
Article in English | MEDLINE | ID: mdl-36805107

ABSTRACT

Serial focussed ion beam scanning electron microscopy (FIB/SEM) enables imaging and assessment of subcellular structures on the mesoscale (10 nm to 10 µm). When applied to vitrified samples, serial FIB/SEM is also a means to target specific structures in cells and tissues while maintaining constituents' hydration shells for in situ structural biology downstream. However, the application of serial FIB/SEM imaging of non-stained cryogenic biological samples is limited due to low contrast, curtaining, and charging artefacts. We address these challenges using a cryogenic plasma FIB/SEM. We evaluated the choice of plasma ion source and imaging regimes to produce high-quality SEM images of a range of different biological samples. Using an automated workflow we produced three-dimensional volumes of bacteria, human cells, and tissue, and calculated estimates for their resolution, typically achieving 20-50 nm. Additionally, a tag-free localisation tool for regions of interest is needed to drive the application of in situ structural biology towards tissue. The combination of serial FIB/SEM with plasma-based ion sources promises a framework for targeting specific features in bulk-frozen samples (>100 µm) to produce lamellae for cryogenic electron tomography.


Subjects
Electron Microscope Tomography, Three-Dimensional Imaging, Humans, Scanning Electron Microscopy, Electron Microscope Tomography/methods, Ions, Three-Dimensional Imaging/methods
5.
Nat Rev Immunol ; 23(8): 511-521, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36755161

ABSTRACT

Recent advances in machine learning and experimental biology have offered breakthrough solutions to problems such as protein structure prediction that were long thought to be intractable. However, despite the pivotal role of the T cell receptor (TCR) in orchestrating cellular immunity in health and disease, computational reconstruction of a reliable map from a TCR to its cognate antigens remains a holy grail of systems immunology. Current data sets are limited to a negligible fraction of the universe of possible TCR-ligand pairs, and performance of state-of-the-art predictive models wanes when applied beyond these known binders. In this Perspective article, we make the case for renewed and coordinated interdisciplinary effort to tackle the problem of predicting TCR-antigen specificity. We set out the general requirements of predictive models of antigen binding, highlight critical challenges and discuss how recent advances in digital biology such as single-cell technology and machine learning may provide possible solutions. Finally, we describe how predicting TCR specificity might contribute to our understanding of the broader puzzle of antigen immunogenicity.


Subjects
Antigens, T-Cell Antigen Receptors, Humans, T-Cell Antigen Receptor Specificity, Machine Learning, Biology
6.
Biol Imaging ; 3: e10, 2023.
Article in English | MEDLINE | ID: mdl-38487693

ABSTRACT

Electron cryo-tomography is an imaging technique for probing 3D structures at the nanometer scale. This technique has been used extensively in the biomedical field to study the complex structures of proteins and other macromolecules. With the advancement in technology, microscopes are currently capable of producing images amounting to terabytes of data per day, posing great challenges for scientists, as the speed of image processing cannot keep up with the ever-higher throughput of the microscopes. Therefore, automation is an essential and natural pathway along which image processing, from individual micrographs to full tomograms, is developing. In this paper, we present Ot2Rec, an open-source pipelining tool which aims to enable scientists to build their own processing workflows in a flexible and automatic manner. The basic building blocks of Ot2Rec are plugins which follow a unified application programming interface structure, making it simple for scientists to contribute to Ot2Rec by adding features which are not already available. We also present three case studies of image processing using Ot2Rec, through which we demonstrate the speedup of a semi-automatic workflow over a manual one, the possibility of writing and using custom (prototype) plugins, and the flexibility of Ot2Rec, which enables the mix-and-match of plugins. We also demonstrate, in the Supplementary Material, a built-in reporting feature in Ot2Rec which aggregates the metadata from all processes being run and outputs it in Jupyter Notebook and/or HTML format for quick review of image processing quality. Ot2Rec can be found at https://github.com/rosalindfranklininstitute/ot2rec.

7.
Biol Imaging ; 3: e9, 2023.
Article in English | MEDLINE | ID: mdl-38487692

ABSTRACT

An emergent volume electron microscopy technique called cryogenic serial plasma focused ion beam milling scanning electron microscopy (pFIB/SEM) can decipher complex biological structures by building a three-dimensional picture of biological samples at mesoscale resolution. This is achieved by collecting consecutive SEM images after successive rounds of FIB milling, each of which exposes a new surface. Due to instrumental limitations, some image processing is necessary before 3D visualization and analysis of the data is possible. SEM images are affected by noise, drift, and charging effects that can make precise 3D reconstruction of biological features difficult. This article presents Okapi-EM, an open-source napari plugin developed to process and analyze cryogenic serial pFIB/SEM images. Okapi-EM enables automated image registration of slices, evaluation of image-quality metrics specific to pFIB/SEM imaging, and mitigation of charging artifacts. Implementation of Okapi-EM within the napari framework ensures that the tools are both user- and developer-friendly, through provision of a graphical user interface and access to Python programming.

8.
Front Cell Dev Biol ; 10: 842342, 2022.
Article in English | MEDLINE | ID: mdl-35433703

ABSTRACT

As sample preparation and imaging techniques have expanded and improved to accommodate larger samples and greater numbers of samples, the bottleneck in volumetric imaging is now data analysis. Annotation and segmentation are both common, yet difficult, data analysis tasks which are required to bring meaning to volumetric data. The SuRVoS application has been updated and redesigned to provide access to both manual and machine-learning-based segmentation and annotation techniques, including support for crowd-sourced data. Combining adjacent, similar voxels (supervoxels) provides a mechanism for speeding up segmentation, both in the painting of annotation and by training a segmentation model on a small amount of annotation. The support for layers allows multiple datasets to be viewed and annotated together, which, for example, enables the use of correlative data (e.g. crowd-sourced annotations or secondary imaging techniques) to guide segmentation. The ability to work with larger data on high-performance servers with GPUs has been added through a client-server architecture; the PyTorch-based image processing and segmentation server is flexible and extensible, and allows the implementation of deep-learning-based segmentation modules. The client side has been built around napari, allowing integration of SuRVoS into an ecosystem for open-source image analysis, while the server side has been built with cloud computing and extensibility through plugins in mind. Together these improvements to SuRVoS provide a platform for accelerating the annotation and segmentation of volumetric and correlative imaging data across modalities and scales.

9.
Sci Rep ; 11(1): 23279, 2021 Dec 02.
Article in English | MEDLINE | ID: mdl-34857791

ABSTRACT

Recently, several convolutional neural networks have been proposed not only for 2D images, but also for 3D and 4D volume segmentation. Nevertheless, due to the large data size of the latter, acquiring a sufficient amount of training annotations is much more strenuous than in 2D images. For 4D time-series tomograms, this is usually handled by segmenting the constituent tomograms independently through time with 3D convolutional neural networks. Inter-volume information is therefore not utilized, potentially leading to temporal incoherence. In this paper, we attempt to resolve this by proposing two hidden Markov model variants that refine 4D segmentation labels made by 3D convolutional neural networks working on each time point. Our models utilize not only inter-volume information, but also the prediction confidence generated by the 3D segmentation convolutional neural networks themselves. To the best of our knowledge, this is the first attempt to refine 4D segmentations made by 3D convolutional neural networks using hidden Markov models. In our experiments we test the models qualitatively, quantitatively and behaviourally, using prespecified segmentations. We demonstrate this in the domain of time-series tomograms, which are typically undersampled to allow more frequent capture: a particularly challenging problem. Finally, our dataset and code are publicly available.
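The refinement idea, treating each voxel's label through time as a Markov chain and decoding it from the per-timepoint network confidences, can be sketched as a small Viterbi pass. This is a hypothetical single-voxel illustration, not the paper's models; the transition weights and label names are invented.

```python
# Hedged sketch: Viterbi smoothing of one voxel's labels over time,
# using the per-timepoint confidences a 3D network might emit.
import math

def viterbi_refine(conf_per_t, stay=0.9):
    """conf_per_t: list over time of {label: probability} for one voxel.
    Returns the temporally coherent label sequence."""
    labels = list(conf_per_t[0])
    switch = (1.0 - stay) / max(len(labels) - 1, 1)
    score = {l: math.log(conf_per_t[0][l] + 1e-12) for l in labels}
    back = []
    for conf in conf_per_t[1:]:
        new_score, ptr = {}, {}
        for l in labels:
            best_prev = max(
                labels,
                key=lambda p: score[p] + math.log(stay if p == l else switch),
            )
            trans = stay if best_prev == l else switch
            new_score[l] = (score[best_prev] + math.log(trans)
                            + math.log(conf[l] + 1e-12))
            ptr[l] = best_prev
        score = new_score
        back.append(ptr)
    # Backtrace from the best final label.
    last = max(labels, key=score.get)
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# A noisy flicker at t=2 is smoothed away by the temporal prior:
conf = [{"cell": 0.9, "bg": 0.1}, {"cell": 0.8, "bg": 0.2},
        {"cell": 0.45, "bg": 0.55}, {"cell": 0.9, "bg": 0.1}]
print(viterbi_refine(conf))
```

The per-frame argmax at t=2 would be "bg"; the temporal model keeps "cell" throughout, which is the inter-volume coherence the abstract describes.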

10.
J Synchrotron Radiat ; 28(Pt 6): 1985-1995, 2021 Nov 01.
Article in English | MEDLINE | ID: mdl-34738954

ABSTRACT

The Dual Imaging and Diffraction (DIAD) beamline at Diamond Light Source is a new dual-beam instrument for full-field imaging/tomography and powder diffraction. This instrument provides the user community with the capability to dynamically image 2D and 3D complex structures and perform phase identification and/or strain mapping using micro-diffraction. The aim is to enable in situ and in operando experiments that require spatially correlated results from both techniques, by providing measurements from the same specimen location quasi-simultaneously. Using an unusual optical layout, DIAD has two independent beams originating from one source that operate in the medium energy range (7-38 keV) and are combined at one sample position. Here, either radiography or tomography can be performed using monochromatic or pink beam, with a 1.4 mm × 1.2 mm field of view and a feature resolution of 1.2 µm. Micro-diffraction is possible with a variable beam size between 13 µm × 4 µm and 50 µm × 50 µm. One key functionality of the beamline is image-guided diffraction, a setup in which the micro-diffraction beam can be scanned over the complete area of the imaging field of view. This moving-beam setup enables the collection of location-specific information about the phase composition and/or strains at any given position within the image/tomography field of view. The dual-beam design allows fast switching between imaging and diffraction modes without the need for complicated and time-consuming mode switches. Both real-time selection of areas of interest for diffraction measurements and the simultaneous collection of imaging and diffraction data during (irreversible) in situ and in operando experiments are possible.

11.
Open Biol ; 11(10): 210160, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34699732

ABSTRACT

In cryo-electron tomography (cryo-ET) of biological samples, the quality of tomographic reconstructions can vary depending on the transmission electron microscope (TEM) instrument and data acquisition parameters. In this paper, we present Parakeet, a 'digital twin' software pipeline for the assessment of the impact of various TEM experiment parameters on the quality of three-dimensional tomographic reconstructions. The Parakeet digital twin is a digital model that can be used to optimize the performance and utilization of a physical instrument to enable in silico optimization of sample geometries, data acquisition schemes and instrument parameters. The digital twin performs virtual sample generation, TEM image simulation, and tilt series reconstruction and analysis within a convenient software framework. As well as being able to produce physically realistic simulated cryo-ET datasets to aid the development of tomographic reconstruction and subtomogram averaging programs, Parakeet aims to enable convenient assessment of the effects of different microscope parameters and data acquisition parameters on reconstruction quality. To illustrate the use of the software, we present the example of a quantitative analysis of missing wedge artefacts on simulated planar and cylindrical biological samples and discuss how data collection parameters can be modified for cylindrical samples where a full 180° tilt range might be measured.
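A back-of-envelope sketch of the missing-wedge geometry discussed in the abstract: a planar sample tilted over a limited range leaves an unsampled angular wedge in Fourier space, while a cylindrical sample measured over the full 180° tilt range leaves none. The helper below is illustrative only and is not part of Parakeet.

```python
# Illustrative missing-wedge arithmetic (not Parakeet code).

def missing_wedge_fraction(tilt_range_deg: float) -> float:
    """Fraction of the 180 degree angular range left unsampled by a
    tilt series covering tilt_range_deg degrees in total."""
    return max(0.0, (180.0 - tilt_range_deg) / 180.0)

print(missing_wedge_fraction(120.0))  # +/-60 degree tilt: one third unsampled
print(missing_wedge_fraction(180.0))  # full rotation: no missing wedge
```

This is the quantity that shrinks to zero for the cylindrical geometry the paper analyses, which is why the data collection scheme can be relaxed in that case.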


Subjects
Electron Microscope Tomography/methods, Computer-Assisted Image Processing/methods, Proteins/ultrastructure, Computer Simulation, Protein Databases, Electron Microscope Tomography/instrumentation, Software
12.
J Synchrotron Radiat ; 28(Pt 3): 889-901, 2021 May 01.
Article in English | MEDLINE | ID: mdl-33949996

ABSTRACT

In this paper, a practical solution is provided for the reconstruction and segmentation of low-contrast X-ray tomographic data of protein crystals from the long-wavelength macromolecular crystallography beamline I23 at Diamond Light Source. The resulting segmented data will provide the path lengths through both diffracting and non-diffracting materials as the basis for analytical absorption corrections for X-ray diffraction data taken in the same sample environment ahead of the tomography experiment. X-ray tomography data from protein crystals can be difficult to analyse due to very low or absent contrast between the different materials: the crystal, the sample holder and the surrounding mother liquor. The proposed data processing pipeline consists of two major sequential operations: model-based iterative reconstruction to improve contrast and minimize the influence of noise and artefacts, followed by segmentation. The segmentation aims to partition the reconstructed data into four phases: the crystal, mother liquor, loop and vacuum. In this study, three semi-automated segmentation methods are evaluated: Gaussian mixture models, geodesic distance thresholding and a novel morphological method, RegionGrow, implemented specifically for the task. The complete reconstruction-segmentation pipeline is integrated into the MPI-based data analysis and reconstruction framework Savu, which is used to reduce computation time through parallelization across a computing cluster and makes the developed methods easily accessible.

13.
J Synchrotron Radiat ; 26(Pt 3): 839-853, 2019 May 01.
Article in English | MEDLINE | ID: mdl-31074449

ABSTRACT

X-ray computed tomography and, specifically, time-resolved volumetric tomography data collections (4D datasets) routinely produce terabytes of data, which need to be effectively processed after capture. This is often complicated by the high rate of data collection required to capture events of interest in a time-series at sufficient time resolution, compelling researchers to perform data collection with a low number of projections for each tomogram in order to achieve the desired `frame rate'. It is common practice to collect, before or after the time-critical portion of the experiment, a representative tomogram with many projections to aid the analysis process, without detrimentally affecting the time-series. In this paper, these highly sampled data are used to aid feature detection in the rapidly collected tomograms by assisting with the upsampling of their projections, which is equivalent to upscaling the θ-axis of the sinograms. A super-resolution approach is proposed based on deep learning (termed an upscaling Deep Neural Network, or UDNN) that aims to upscale the sinogram space of individual tomograms in a 4D dataset of a sample. This is done using learned behaviour from a dataset containing a high number of projections, taken of the same sample and occurring at the beginning or the end of the data collection. The prior provided by the highly sampled tomogram allows the application of an upscaling process with better accuracy than existing interpolation techniques. This upscaling process subsequently permits an increase in the quality of the tomogram's reconstruction, especially in situations that require capture of only a limited number of projections, as is the case in high-frequency time-series capture. The increase in quality can prove very helpful for researchers, as downstream it enables easier segmentation of the tomograms in areas of interest, for example.
The method itself comprises a convolutional neural network which, through training, learns an end-to-end mapping between sinograms with a low and a high number of projections. Since datasets can differ greatly between experiments, this approach specifically develops a lightweight network that can easily and quickly be retrained for different types of samples. As part of the evaluation of our technique, results with different hyperparameter settings are presented, and the method has been tested on both synthetic and real-world data. In addition, accompanying real-world experimental datasets have been released in the form of two 80 GB tomograms depicting a metallic pin that undergoes corruption from a droplet of salt water. A new engineering-based phantom dataset, inspired by the experimental datasets, has also been produced and released.
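The baseline that such a super-resolution network is typically compared against, plain interpolation along the θ-axis of the sinogram, can be sketched as follows. This toy interleaves linearly interpolated rows between adjacent projection angles; it is not the UDNN itself, and the tiny sinogram is invented for illustration.

```python
# Illustrative theta-axis upscaling by linear interpolation (the kind of
# baseline a learned upscaler is measured against; not the UDNN).

def upscale_theta(sinogram):
    """sinogram: list of projections (rows over theta). Returns a sinogram
    with an interpolated row inserted between each adjacent pair of
    angles (2n - 1 rows from n)."""
    out = []
    for a, b in zip(sinogram, sinogram[1:]):
        out.append(a)
        out.append([(x + y) / 2.0 for x, y in zip(a, b)])
    out.append(sinogram[-1])
    return out

sino = [[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]]  # 3 angles x 2 detector bins
print(upscale_theta(sino))
```

A learned upscaler replaces the midpoint rule with a mapping fitted to the highly sampled reference tomogram, which is where the accuracy gain over interpolation comes from.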

14.
J Synchrotron Radiat ; 25(Pt 4): 998-1009, 2018 Jul 01.
Article in English | MEDLINE | ID: mdl-29979161

ABSTRACT

This manuscript presents the current status and technical details of the Spectroscopy Village at Diamond Light Source. The Village is formed of four beamlines: I18, B18, I20-Scanning and I20-EDE. The Village provides the UK community with local access to a hard X-ray microprobe, a quick-scanning multi-purpose XAS beamline, a high-intensity beamline for X-ray absorption spectroscopy of dilute samples and X-ray emission spectroscopy, and an energy-dispersive extended X-ray absorption fine-structure beamline. The optics of B18, I20-Scanning and I20-EDE are detailed; moreover, recent developments on the four beamlines, including new detector hardware and changes in acquisition software, are described.

15.
J Vis Exp ; (126), 2017 Aug 23.
Article in English | MEDLINE | ID: mdl-28872144

ABSTRACT

Segmentation is the process of isolating specific regions or objects within an imaged volume, so that further study can be undertaken on these areas of interest. When considering the analysis of complex biological systems, the segmentation of three-dimensional image data is a time-consuming and labor-intensive step. With the increased availability of many imaging modalities and with automated data collection schemes, this poses an increased challenge for the modern experimental biologist to move from data to knowledge. This publication describes the use of SuRVoS Workbench, a program designed to address these issues by providing methods to semi-automatically segment complex biological volumetric data. Three datasets of differing magnification and imaging modalities are presented here, each highlighting different strategies of segmenting with SuRVoS. Phase-contrast X-ray tomography (microCT) of the fruiting body of a plant is used to demonstrate segmentation using model training; cryo-electron tomography (cryoET) of human platelets is used to demonstrate segmentation using super- and megavoxels; and cryo-soft X-ray tomography (cryoSXT) of a mammalian cell line is used to demonstrate the label-splitting tools. Strategies and parameters for each datatype are also presented. By blending a selection of semi-automatic processes into a single interactive tool, SuRVoS provides several benefits. Overall time to segment volumetric data is reduced by a factor of five when compared to manual segmentation, a mainstay in many image processing fields. This is a significant saving when full manual segmentation can take weeks of effort. Additionally, subjectivity is addressed through the use of computationally identified boundaries, and by splitting complex collections of objects by their calculated properties rather than on a case-by-case basis.


Subjects
Computer-Assisted Image Processing/methods, X-Ray Computed Tomography/methods, Humans
16.
J Struct Biol ; 198(1): 43-53, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28246039

ABSTRACT

Segmentation of biological volumes is a crucial step needed to fully analyse their scientific content. Not having access to convenient tools with which to segment or annotate the data means many biological volumes remain under-utilised. Automatic segmentation of biological volumes is still a very challenging research field, and current methods usually require a large amount of manually-produced training data to deliver a high-quality segmentation. However, the complex appearance of cellular features and the high variance from one sample to another, along with the time-consuming work of manually labelling complete volumes, makes the required training data very scarce or non-existent. Thus, fully automatic approaches are often infeasible for many practical applications. With the aim of unifying the segmentation power of automatic approaches with the user expertise and ability to manually annotate biological samples, we present a new workbench named SuRVoS (Super-Region Volume Segmentation). Within this software, a volume to be segmented is first partitioned into hierarchical segmentation layers (named Super-Regions) and is then interactively segmented with the user's knowledge input in the form of training annotations. SuRVoS first learns from and then extends user inputs to the rest of the volume, while using Super-Regions for quicker and easier segmentation than when using a voxel grid. These benefits are especially noticeable on noisy, low-dose, biological datasets.
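The Super-Region idea, grouping adjacent, similar voxels so that annotation and learning operate on far fewer units than the raw voxel grid, can be caricatured in one dimension. This is a hypothetical toy, not the SuRVoS algorithm, which builds proper hierarchical supervoxel partitions; the intensities and tolerance below are invented.

```python
# Toy 1-D "supervoxel" grouping (illustrative; not the SuRVoS method).

def supervoxels_1d(intensities, tol=0.1):
    """Greedy run-length grouping: a voxel joins the current group if its
    intensity is within tol of the group's last member."""
    groups = [[intensities[0]]]
    for v in intensities[1:]:
        if abs(v - groups[-1][-1]) <= tol:
            groups[-1].append(v)
        else:
            groups.append([v])
    return groups

voxels = [0.10, 0.12, 0.11, 0.80, 0.82, 0.15]
print(supervoxels_1d(voxels))
```

Six voxels collapse into three groups; labelling a group labels all its members at once, which is the source of the speed-up the abstract describes, especially on noisy, low-dose data where per-voxel labelling is unreliable.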


Subjects
Datasets as Topic, Software, Algorithms, Data Curation/methods, Machine Learning
17.
J Synchrotron Radiat ; 24(Pt 1): 248-256, 2017 Jan 01.
Article in English | MEDLINE | ID: mdl-28009564

ABSTRACT

With the development of fourth-generation high-brightness synchrotrons on the horizon, the already large volume of data collected on imaging and mapping beamlines is set to increase by orders of magnitude. As such, an easy and accessible way of dealing with such large datasets as quickly as possible is required in order to address the core scientific problems during experimental data collection. Savu is an accessible and flexible big-data processing framework that is able to deal with both the variety and the volume of multimodal and multidimensional scientific datasets, such as those from chemical tomography experiments on the I18 microfocus scanning beamline at Diamond Light Source.

18.
Philos Trans A Math Phys Eng Sci ; 373(2043)2015 Jun 13.
Article in English | MEDLINE | ID: mdl-25939626

ABSTRACT

Tomographic datasets collected at synchrotrons are becoming very large and complex, and, therefore, need to be managed efficiently. Raw images may have high pixel counts, and each pixel can be multidimensional and associated with additional data such as those derived from spectroscopy. In time-resolved studies, hundreds of tomographic datasets can be collected in sequence, yielding terabytes of data. Users of tomographic beamlines are drawn from various scientific disciplines, and many are keen to use tomographic reconstruction software that does not require a deep understanding of reconstruction principles. We have developed Savu, a reconstruction pipeline that enables users to rapidly reconstruct data to consistently create high-quality results. Savu is designed to work in an 'orthogonal' fashion, meaning that data can be converted between projection and sinogram space throughout the processing workflow as required. The Savu pipeline is modular and allows processing strategies to be optimized for users' purposes. In addition to the reconstruction algorithms themselves, it can include modules for identification of experimental problems, artefact correction, general image processing and data quality assessment. Savu is open source, open licensed and 'facility-independent': it can run on standard cluster infrastructure at any institution.
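The 'orthogonal' projection/sinogram duality mentioned above amounts to an axis swap: a projection stack indexed (angle, row, column) and a sinogram stack indexed (row, angle, column) are views of the same data. A minimal pure-Python sketch of that swap follows; the real Savu pipeline operates on large on-disk arrays, and this tiny example is purely illustrative.

```python
# Illustrative projection-to-sinogram axis swap (not Savu code).

def projections_to_sinograms(proj):
    """proj[angle][row][col] -> sino[row][angle][col]."""
    n_ang, n_row = len(proj), len(proj[0])
    return [[proj[a][r] for a in range(n_ang)] for r in range(n_row)]

proj = [[[1, 2], [3, 4]],   # angle 0: detector rows 0 and 1
        [[5, 6], [7, 8]]]   # angle 1
print(projections_to_sinograms(proj))
```

Because the conversion is lossless and cheap, a pipeline can freely interleave plugins that prefer projection space (e.g. flat-field correction) with plugins that prefer sinogram space (e.g. ring-artefact removal), which is the workflow flexibility the abstract highlights.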

19.
J Synchrotron Radiat ; 22(3): 853-8, 2015 May.
Article in English | MEDLINE | ID: mdl-25931106

ABSTRACT

Synchrotron light source facilities worldwide generate terabytes of data in numerous incompatible data formats from a wide range of experiment types. The Data Analysis WorkbeNch (DAWN) was developed to address the challenge of providing a single visualization and analysis platform for data from any synchrotron experiment (including single-crystal and powder diffraction, tomography and spectroscopy), whilst also being sufficiently extensible for new specific use case analysis environments to be incorporated (e.g. ARPES, PEEM). In this work, the history and current state of DAWN are presented, with two case studies to demonstrate specific functionality. The first is an example of a data processing and reduction problem using the generic tools, whilst the second shows how these tools can be targeted to a specific scientific area.

20.
J Synchrotron Radiat ; 22(3): 828-38, 2015 May.
Article in English | MEDLINE | ID: mdl-25931103

ABSTRACT

I12 is the Joint Engineering, Environmental and Processing (JEEP) beamline, constructed during Phase II of the Diamond Light Source. I12 is located on a short (5 m) straight section of the Diamond storage ring and uses a 4.2 T superconducting wiggler to provide polychromatic and monochromatic X-rays in the energy range 50-150 keV. The beam energy enables good penetration through large or dense samples, combined with a large beam size (1 mrad horizontally × 0.3 mrad vertically). The beam characteristics permit the study of materials and processes inside environmental chambers without unacceptable attenuation of the beam and without the need to use sample sizes which are atypically small for the process under study. X-ray techniques available to users are radiography, tomography, energy-dispersive diffraction, monochromatic and white-beam two-dimensional diffraction/scattering and small-angle X-ray scattering. Since commencing operations in November 2009, I12 has established a broad user community in materials science and processing, chemical processing, biomedical engineering, civil engineering, environmental science, palaeontology and physics.


Subjects
X-Ray Crystallography/instrumentation, Lasers, Particle Accelerators/instrumentation, X-Ray Spectrometry/instrumentation, X-Rays, Energy Transfer, Equipment Design, Equipment Failure Analysis, Lighting/instrumentation, United Kingdom